35 research outputs found

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer volumes of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.

    COVID-19 symptoms at hospital admission vary with age and sex: results from the ISARIC prospective multinational observational study

    Background: The ISARIC prospective multinational observational study is the largest cohort of hospitalized patients with COVID-19. We present relationships of age, sex, and nationality to presenting symptoms. Methods: International, prospective observational study of 60,109 hospitalized symptomatic patients with laboratory-confirmed COVID-19 recruited from 43 countries between 30 January and 3 August 2020. Logistic regression was performed to evaluate relationships of age and sex to published COVID-19 case definitions and the most commonly reported symptoms. Results: ‘Typical’ symptoms of fever (69%), cough (68%) and shortness of breath (66%) were the most commonly reported. 92% of patients experienced at least one of these. Prevalence of typical symptoms was greatest in 30- to 60-year-olds (respectively 80, 79, 69%; at least one 95%). They were reported less frequently in children (≤ 18 years: 69, 48, 23; 85%), older adults (≥ 70 years: 61, 62, 65; 90%), and women (66, 66, 64; 90%; vs. men 71, 70, 67; 93%, each P < 0.001). The most common atypical presentations under 60 years of age were nausea and vomiting and abdominal pain, and over 60 years was confusion. Regression models showed significant differences in symptoms with sex, age and country. Interpretation: This international collaboration has allowed us to report reliable symptom data from the largest cohort of patients admitted to hospital with COVID-19. Adults over 60 and children admitted to hospital with COVID-19 are less likely to present with typical symptoms. Nausea and vomiting are common atypical presentations under 30 years. Confusion is a frequent atypical presentation of COVID-19 in adults over 60 years. Women are less likely to experience typical symptoms than men.
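Group comparisons like the ones above (e.g. men vs. women reporting at least one typical symptom) are commonly summarised as odds ratios before fitting a full logistic regression. A minimal sketch with illustrative counts (not the study's data):

```python
from math import exp, log, sqrt

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table, with a 95% Wald confidence interval.
    a: exposed with outcome, b: exposed without,
    c: unexposed with outcome, d: unexposed without."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = exp(log(or_) - 1.96 * se)
    hi = exp(log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Illustrative counts only: men vs. women with at least one typical symptom.
or_, ci = odds_ratio(930, 70, 900, 100)
print(round(or_, 2), tuple(round(x, 2) for x in ci))  # → 1.48 (1.07, 2.03)
```

The study itself models symptom prevalence with logistic regression, which generalises this pairwise comparison to adjust for age, sex, and country simultaneously.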

    CernVM Workshop 2019

    In preparation for the post-LHC era, the Future Circular Collider (FCC) Collaboration is undertaking design studies for multiple accelerator projects, with emphasis on proton-proton and electron-positron high-energy frontier machines. From the beginning of the collaboration, the development of a software stack with common and interchangeable packages has played an important role in simulation, reconstruction, and analysis studies. As with the existing LHC experiments, these packages need to be built and deployed for different platforms and compilers. For these tasks FCC relies on Spack, a package manager recently adopted by other experiments in the High-Energy Physics (HEP) community. Despite this enthusiastic adoption, the integration of Spack with CVMFS for software distribution, while already possible, exposes some limitations. This talk provides an overview of these difficulties in the context of the FCC Software.

    Parallelization and optimization of a High Energy Physics analysis with ROOT’s RDataFrame and Spark

    After a remarkable era full of great discoveries, particle physics has an ambitious and broad experimental program aiming to expand the limits of our knowledge about the nature of our universe. The roadmap for the coming decades is full of big challenges, with the Large Hadron Collider (LHC) at the forefront of scientific research. The discovery of the Higgs boson in 2012 is a major milestone in the history of physics that has inspired many new theories based on its presence. Located at CERN, the European Organization for Nuclear Research, the LHC will remain the most powerful accelerator in the world for years to come. In order to extend its discovery potential, a major hardware upgrade will take place in the 2020s to increase the number of collisions produced by a factor of five beyond its design value. The upgraded version of the LHC, called High-Luminosity LHC (HL-LHC), is expected to produce some 30 times more data than the LHC has currently produced. As the total amount of LHC data already collected is close to an exabyte (10^18 bytes), the foreseen evolution in hardware technology will not be enough to face the challenge of processing those increasing volumes of data. Therefore, software will have to cover that gap: there is a need for tools to easily express physics analyses in a high-level way, as well as to automatically parallelize those analyses on new parallel and distributed infrastructures. The High Energy Physics (HEP) community has developed specialized solutions for processing experiment data over decades. However, HEP data sizes are becoming more common in other fields, and recent breakthroughs in Big Data and Cloud platforms raise the question of whether such technologies could be applied to the domain of physics data analysis. 
This thesis is carried out in the context of a collaboration between different CERN departments with the aim of providing web-based interactive services as the entry point for scientists to cutting-edge distributed data processing frameworks such as Spark. In that context, this thesis aims to exploit the aforementioned services to run a real analysis of the CERN TOTEM experiment on 4.7 TB of data. In addition, this thesis explores the benefits of a new high-level programming model for physics analysis, called RDataFrame, by translating the original code of the TOTEM analysis to use RDataFrame. The following sections describe for the first time the detailed process of translating a data analysis of this magnitude to the programming model offered by RDataFrame. Moreover, we compare the performance of the two implementations and provide results gathered from local and distributed analyses. Results are promising and show how the processing time of the dataset can be reduced by multiple orders of magnitude with the new analysis model.

    Modern Software Stack Building for HEP

    High-Energy Physics has evolved a rich set of software packages that need to work harmoniously to carry out the key software tasks needed by experiments. The problem of consistently building and deploying these packages as a coherent software stack is one that is shared across the HEP community. To that end, the HEP Software Foundation Packaging Working Group has worked to identify common solutions that can be used across experiments, with an emphasis on consistent, reproducible builds and easy deployment into CernVM-FS or containers via CI systems. We based our approach on well-identified use cases and requirements from many experiments. In this paper we summarise the work of the group in the last year and how we have explored various approaches based on package managers from industry and the scientific computing community. We give details about a solution based on the Spack package manager, which has been used to build the software required by the SuperNEMO and FCC experiments and trialled for a multi-experiment software stack, Key4hep. We discuss changes that needed to be made to Spack to satisfy all our requirements, and show how support for a build environment for software developers is provided.
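Spack describes each package with a declarative Python recipe (`package.py`). The sketch below is a hypothetical recipe, not one of the real Key4hep or SuperNEMO packages; the name, URL, versions, and checksum are placeholders, used only to illustrate the style on which such stacks are built:

```python
# Hypothetical Spack recipe (package.py); all concrete values are placeholders.
from spack.package import *


class Mylib(CMakePackage):
    """Example HEP analysis library, for illustration only."""

    homepage = "https://example.org/mylib"
    url = "https://example.org/mylib-1.0.0.tar.gz"

    version("1.0.0", sha256="0" * 64)  # placeholder checksum

    variant("shared", default=True, description="Build shared libraries")

    depends_on("cmake@3.16:", type="build")
    depends_on("root")  # ROOT, as found in typical HEP stacks

    def cmake_args(self):
        # Map the Spack variant onto the corresponding CMake option.
        return [self.define_from_variant("BUILD_SHARED_LIBS", "shared")]
```

Because recipes carry their dependency constraints (`depends_on`, version ranges, variants), Spack can concretize a whole experiment stack into one consistent, reproducible build, which is what makes deployment into CernVM-FS or containers tractable.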

    Distributed data analysis with ROOT RDataFrame

    Widespread distributed processing of big datasets has been around for more than a decade now thanks to Hadoop, but only recently have higher-level abstractions been proposed for programmers to easily operate on those datasets, e.g. Spark. ROOT has joined that trend with its RDataFrame tool for declarative analysis, which currently supports local multi-threaded parallelisation. However, RDataFrame’s programming model is general enough to accommodate multiple implementations or backends: users could write their code once and execute it as-is, locally or in a distributed environment, just by selecting the corresponding backend. This abstract introduces PyRDF, a new Python library developed on top of RDataFrame to seamlessly switch from local to distributed environments with no changes in the application code. In addition, PyRDF has been integrated with a service for web-based analysis, SWAN, where users can dynamically plug in new resources, as well as write, execute, monitor and debug distributed applications via an intuitive interface.
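The key idea, a lazily recorded declarative chain whose execution strategy is chosen by a pluggable backend, can be sketched in a few lines of plain Python. This is not the PyRDF or RDataFrame API, only an illustration of the pattern with invented class and method names:

```python
# Sketch of a declarative, backend-switchable analysis chain.
# Operations are recorded lazily; the backend decides how to run them,
# so the analysis code itself never changes.
from concurrent.futures import ThreadPoolExecutor


class Frame:
    def __init__(self, data, backend="local"):
        self.data, self.backend, self.ops = data, backend, []

    def Filter(self, pred):
        self.ops.append(pred)  # record the cut; nothing runs yet
        return self

    def Count(self):
        def passes(x):
            return all(op(x) for op in self.ops)
        if self.backend == "local":
            return sum(passes(x) for x in self.data)
        # "distributed" stand-in: split into chunks, merge partial counts
        chunks = [self.data[i::4] for i in range(4)]
        with ThreadPoolExecutor(4) as ex:
            parts = ex.map(lambda c: sum(passes(x) for x in c), chunks)
        return sum(parts)


events = list(range(100))
cut = lambda e: e % 2 == 0
print(Frame(events, "local").Filter(cut).Count())        # → 50
print(Frame(events, "distributed").Filter(cut).Count())  # → 50
```

A real distributed backend would ship the recorded operations to Spark executors and merge the partial results, but the user-facing chain stays identical, which is exactly the portability PyRDF targets.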

    Numerical study of the steering system of a dune buggy

    Computer-aided design of the steering system of motor vehicles.

    Vehicle Maneuver Detection with Accelerometer-Based Classification

    In the mobile computing era, smartphones have become instrumental tools for developing innovative mobile context-aware systems. In that sense, their usage in the vehicular domain eases the development of novel, personal transportation solutions. In this context, the present work introduces an innovative mechanism to infer the current kinematic state of a vehicle from the accelerometer data of a smartphone mounted in the vehicle. Unlike previous proposals, the introduced architecture addresses the computational limitations of such devices by carrying out the detection process in an incremental fashion. For its realization, we have evaluated different classification algorithms to act as agents within the architecture. Finally, our approach has been tested with a real-world dataset collected by means of an ad hoc mobile application developed for this purpose.
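A common first step in this kind of maneuver detection is to slide a window over the accelerometer stream and classify each window from simple features. The sketch below is a hedged illustration of that idea, not the paper's classifiers; the window size and threshold are invented for the example:

```python
# Illustrative sliding-window maneuver labelling from longitudinal
# acceleration (m/s^2). Thresholds and window parameters are assumptions.
from statistics import mean


def label_window(samples, threshold=1.5):
    """Label one window of longitudinal-acceleration samples."""
    m = mean(samples)
    if m <= -threshold:
        return "braking"
    if m >= threshold:
        return "accelerating"
    return "steady"


def sliding_windows(stream, size=4, step=2):
    # Overlapping windows over the sample stream.
    for i in range(0, len(stream) - size + 1, step):
        yield stream[i:i + size]


trace = [0.1, 0.0, -2.1, -2.4, -2.2, 0.2, 2.0, 2.3, 2.1, 0.1]
print([label_window(w) for w in sliding_windows(trace)])
# → ['steady', 'braking', 'steady', 'accelerating']
```

Classifying window by window like this is what makes the approach incremental: each new window is labelled as it arrives, with constant memory, which suits the limited resources of a phone.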